Deep neural reasoning
Similar articles
Probabilistic Reasoning via Deep Learning: Neural Association Models
In this paper, we propose a new deep learning approach, called the neural association model (NAM), for probabilistic reasoning in artificial intelligence. We propose to use neural networks to model the association between any two events in a domain: a network takes one event as input and computes a conditional probability of the other event, modelling how likely the two events are to be associated. The a...
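The snippet above describes the core NAM idea: a neural network that scores how strongly one event predicts another as a conditional probability. As a purely illustrative sketch (not the authors' architecture; the class name, the use of PyTorch, the embedding sizes, and the two-event encoding are all assumptions here), such an association scorer could look like this:

import torch
import torch.nn as nn

class NeuralAssociationModel(nn.Module):
    # Sketch of a NAM-style scorer: given ids of an antecedent event E1 and a
    # candidate consequent E2, output an association probability P(E2 | E1).
    def __init__(self, num_events, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.e1_embed = nn.Embedding(num_events, embed_dim)
        self.e2_embed = nn.Embedding(num_events, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, e1_ids, e2_ids):
        # Concatenate the two event embeddings and map them to a probability.
        h = torch.cat([self.e1_embed(e1_ids), self.e2_embed(e2_ids)], dim=-1)
        return torch.sigmoid(self.mlp(h)).squeeze(-1)

# Usage: score how strongly (hypothetical) event 3 is associated with event 7.
model = NeuralAssociationModel(num_events=1000)
prob = model(torch.tensor([3]), torch.tensor([7]))

Trained with a binary cross-entropy loss on observed versus unobserved event pairs, a scorer of this shape yields the kind of conditional association probabilities the abstract refers to.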
Deep-reasoning-centred Dialogue
This paper discusses an implemented dialogue system which generates the meanings of utterances by taking into account: the surface mood of the user’s last utterance; the meanings of all the user’s utterances from the current discourse; the system’s expert knowledge; and the system’s beliefs about the current situation arising from the discourse (including its beliefs about the user and her beli...
Deep Learning for Ontology Reasoning
In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. To this end, we introduce a new model for statistical relational learning that is built upon deep recursive neural networks, and give experimental evidence that it can easily compete with, or even outperform, existing logic-based reasoners on the task of ontol...
Neural Sub-Symbolic Reasoning
The sub-symbolic representation often corresponds to a pattern that mirrors the way the biological sense organs describe the world. Sparse binary vectors can describe sub-symbolic representations, which can be stored efficiently in associative memories. According to production system theory, we can define a geometrically based problem-solving model as a production system operating on sub-sy...
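As an illustrative aside (a minimal sketch under assumed details, not code from the paper; the class and variable names are hypothetical), sparse binary patterns can be kept in a simple content-addressable store and recalled by overlap with a noisy cue, which is the kind of associative-memory lookup the snippet alludes to:

import numpy as np

class SparseAssociativeMemory:
    # Store sparse binary vectors and recall the stored pattern that best
    # overlaps a (possibly partial or noisy) cue.
    def __init__(self):
        self.patterns = []

    def store(self, pattern):
        self.patterns.append(np.asarray(pattern, dtype=int))

    def recall(self, cue):
        cue = np.asarray(cue, dtype=int)
        overlaps = [int(np.dot(p, cue)) for p in self.patterns]
        return self.patterns[int(np.argmax(overlaps))]

mem = SparseAssociativeMemory()
mem.store([1, 0, 0, 1, 0, 0, 0, 1])
mem.store([0, 1, 0, 0, 0, 1, 0, 0])
noisy_cue = [1, 0, 0, 1, 0, 0, 0, 0]   # partial version of the first pattern
print(mem.recall(noisy_cue))            # recovers the full first pattern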
Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions
Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability – they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as “black box” models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network ...
Journal
Journal title: Nature
Year: 2016
ISSN: 0028-0836,1476-4687
DOI: 10.1038/nature19477